Hybrid Batch Bayesian Optimization
Authors

Abstract
Bayesian Optimization (BO) aims to optimize an unknown function that is costly to evaluate. We focus on applications where concurrent function evaluations are possible. In such cases, BO can either evaluate the function sequentially (sequential mode) or evaluate it at a batch of multiple inputs at once (batch mode). The sequential mode generally yields better optimization performance, since each function evaluation is selected with more information, whereas the batch mode is more time-efficient (fewer iterations). Our goal is to combine the strengths of both settings. We systematically analyze BO using a Gaussian Process as the posterior estimator and provide a hybrid algorithm that dynamically switches between sequential and batch mode with variable batch sizes. We theoretically justify our algorithm and present experimental results on eight benchmark BO problems. The results show that our method achieves substantial speedup (up to 78%) compared to the sequential mode, without any significant loss in performance.
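The sequential/batch trade-off described above can be sketched with a toy GP-UCB loop. This is an illustration only: the RBF kernel, the UCB acquisition, the hallucinated-observation batching, and the acquisition-drop switching rule below are assumptions chosen for the sketch, not the paper's actual criterion or constants.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-5):
    """GP posterior mean and variance at x_query given observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)   # prior variance is 1 under this kernel
    return mean, np.maximum(var, 1e-12)

def hybrid_bo(f, n_rounds=10, max_batch=4, drop=0.1, seed=0):
    """Illustrative hybrid loop: each round starts from the UCB maximizer and
    keeps adding batch members under 'hallucinated' posterior-mean outcomes;
    once hallucination makes the best acquisition value drop noticeably,
    further batch members would be poorly informed, so the round stops
    (a batch of size 1 is just a sequential step)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    x_obs = rng.uniform(0.0, 1.0, 3)
    y_obs = np.array([f(x) for x in x_obs])
    for _ in range(n_rounds):
        batch = []
        x_fant, y_fant = x_obs.copy(), y_obs.copy()
        best_acq = None
        for _ in range(max_batch):
            mean, var = gp_posterior(x_fant, y_fant, grid)
            ucb = mean + 2.0 * np.sqrt(var)
            i = int(np.argmax(ucb))
            if best_acq is None:
                best_acq = ucb[i]
            elif ucb[i] < best_acq - drop:
                break  # switch back to sequential: stop growing the batch
            batch.append(grid[i])
            # Hallucinate the outcome with the posterior mean so the next
            # batch member is pushed toward a different region.
            x_fant = np.append(x_fant, grid[i])
            y_fant = np.append(y_fant, mean[i])
        # Evaluate the whole batch (concurrently in a real deployment).
        x_obs = np.append(x_obs, batch)
        y_obs = np.append(y_obs, [f(x) for x in batch])
    best = int(np.argmax(y_obs))
    return x_obs[best], y_obs[best]
```

For example, maximizing `f(x) = -(x - 0.7)**2` on [0, 1] with this loop homes in on the optimum near 0.7 while spending some rounds on batches larger than one.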
Similar Resources
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization
In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g. when evaluating the performance of several different neural network architectures in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm — the parallel knowledge gradient method. By construction, this method provides the one-step Bayes opti...
Budgeted Batch Bayesian Optimization With Unknown Batch Sizes
Parameter settings profoundly impact the performance of machine learning algorithms and laboratory experiments. The classical grid search or trial-and-error methods are exponentially expensive in large parameter spaces, and Bayesian optimization (BO) offers an elegant alternative for global optimization of black-box functions. In situations where the black box function can be evaluated at multiple ...
Distributed Batch Gaussian Process Optimization
This paper presents a novel distributed batch Gaussian process upper confidence bound (DB-GP-UCB) algorithm for performing batch Bayesian optimization (BO) of highly complex, costly-to-evaluate black-box objective functions. In contrast to existing batch BO algorithms, DB-GP-UCB can jointly optimize a batch of inputs (as opposed to selecting the inputs of a batch one at a time) while still prese...
Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions
We develop parallel predictive entropy search (PPES), a novel algorithm for Bayesian optimization of expensive black-box objective functions. At each iteration, PPES aims to select a batch of points which will maximize the information gain about the global maximizer of the objective. Well known strategies exist for suggesting a single evaluation point based on previous observations, while far f...
Dynamic On-line Re-optimization Control of A Batch MMA Polymerization Reactor Using Hybrid Neural Network Models
A hybrid neural network model based on-line re-optimization control strategy is developed for a batch polymerization reactor. To address the difficulties in batch polymerization reactor modeling, the hybrid neural network model contains a simplified mechanistic model covering material balance assuming perfect temperature control, and recurrent neural networks modeling the residuals of the simpl...
Journal: CoRR
Volume: abs/1202.5597
Publication year: 2012